I'm going to talk about this project.
This is hopefully going to be a more thorough version of what I talked about at SC24,
because I think I had like 20 minutes there and it was a little bit tight.
So I'll tell you about this project, which predates me and I'll explain the history and
the contributors and how we've used it.
So I started working on this project when I was at Intel and I was there for almost
seven years and we did a lot of interesting co-design things, lots of software co-design,
which is easy to talk about.
We did do some hardware stuff, which I can sort of talk about, but I'll explain the uses
of this code.
I'm fairly proud of this.
Tim Mattson retired, so I'm the curator, maintainer, and developer at this point.
But I think there's a lot of neat stuff in here and I think it's a really useful tool
for computer scientists.
I think it's at this point somewhat unique and I'll talk about the landscape for that.
Yeah, so Tim Mattson created this starting sometime in the 1990s, and Tim was motivated
to understand computer architecture.
And of course, in the 1990s, parallel computing wasn't exactly the thing it is today.
Nowadays everything is SIMD, everything is multi-core.
But back in the day, things were a little bit simpler.
And so some of the kernels were really sequential.
There's a branch prediction kernel, which I think has some interesting generalizations,
but that just shows you how broad the scope was: trying to understand architecture
very generally.
And then a couple of years before I got to Intel, Rob Van der Wijngaart, who's now my
NVIDIA colleague, worked on developing the initial parallel implementations with MPI and
OpenMP as they existed in the good old days, CPU-only simple stuff, but focused on
implementing the algorithms and designing the parallel scheme, which is something I built upon.
And so when I joined, we were interested in exascale and programming models in general.
And so we did a lot of study, which then spawned some new ideas, and then I sort of
did some things I'll talk about.
So a lot of people have contributed.
Obviously, a bunch of people from Intel contributed because we try very hard to be lazy.
One of the things we're trying to do with this project is have it be a fair study from
the standpoint of expertise. Tim knows a lot about OpenMP and I know a lot about MPI,
so if we wrote the MPI and OpenMP versions ourselves and then also wrote all the code
in UPC or Charm++, that would be an unfair comparison.
So we enlisted a lot of these folks like Jacob Nelson at the University of Washington and
others to help us, or we wrote code and then we sent it to folks and said, tell us what
we did wrong.
And I've been really fortunate.
There's some folks, I don't know if Carsten Bauer is on the call or if he graduated, but
I believe he contributed some really good Rust or Julia stuff; I'm sorry, I can't
remember all of it now, but it was some really good stuff.
And there's been a lot of contributions lately, especially in Rust and Julia, that we
couldn't have done ourselves.
So thanks everybody who contributed.
And this should tell you, we are a pretty open project.
If you're interested in playing around, we make it easy for interested people to contribute.
Presenters
Accessible via: Open Access
Duration: 00:59:26 min
Recording date: 2025-03-03
Uploaded on: 2025-03-03 11:36:04
Language: en-US